With the rapid development of Machine Translation (MT) systems, and especially the new boost from Neural MT (NMT) models, MT output quality has reached a new level of accuracy. However, many researchers have criticised popular automatic evaluation metrics such as BLEU for failing to distinguish state-of-the-art NMT systems by their quality differences. In this short paper, we describe the design and implementation of a linguistically motivated, human-in-the-loop evaluation metric that looks into idiomatic and terminological Multi-word Expressions (MWEs). MWEs have been a bottleneck in many Natural Language Processing (NLP) tasks, including MT. MWEs can serve as one of the main factors for distinguishing MT systems, by examining their ability to recognise and translate MWEs accurately and in a meaning-equivalent manner.
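The MWE-based comparison of systems can be sketched as a toy glossary lookup; this is a drastic simplification of the human-in-the-loop metric, and the glossary, source sentence, and hypothesis below are invented examples, not project data:

```python
# Minimal sketch of an MWE-focused check on MT output (invented toy data).
def mwe_coverage(source, mt_output, glossary):
    """For each glossary MWE present in the source, record whether any of its
    acceptable target-side renderings appears in the MT output."""
    return {
        mwe: any(t in mt_output for t in acceptable)
        for mwe, acceptable in glossary.items()
        if mwe in source
    }

glossary = {
    "kick the bucket": ["den Löffel abgeben", "sterben"],  # idiomatic MWE
    "in hot water": ["in Schwierigkeiten"],                # idiomatic MWE
}
src = "He is in hot water and may kick the bucket."
hyp = "Er ist in heißem Wasser und könnte den Löffel abgeben."  # "hot water" rendered literally
print(mwe_coverage(src, hyp, glossary))
# → {'kick the bucket': True, 'in hot water': False}
```

A system that renders both idioms meaning-equivalently would score higher here than one that translates them literally, which is exactly the kind of difference BLEU tends to miss.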
Pre-trained language models (PLMs) often take advantage of the monolingual and multilingual datasets that are freely available online to acquire general or mixed-domain knowledge before deployment into specific tasks. Extra-large PLMs (xLPLMs) have recently been proposed with claims of top performance over smaller-sized PLMs on tasks such as machine translation (MT). These xLPLMs include Meta-AI's WMT21 dense-24-wide-en-X and NLLB. \textit{In this work, we examine whether xLPLMs are absolutely superior to smaller-sized PLMs when fine-tuned towards domain-specific MT.} We use two in-domain datasets of different sizes: commercial automotive in-house data and \textbf{clinical} shared-task data from the ClinSpEn2022 challenge at WMT2022. We choose the popular Marian Helsinki as the smaller-sized PLM and two massive mega-transformers from Meta-AI as the xLPLMs. Our experimental investigation shows that 1) on the smaller-sized in-domain commercial automotive data, the xLPLM WMT21 dense-24-wide-en-X indeed achieves much better evaluation scores than Marian as measured by S\textsc{acre}BLEU and hLEPOR, even though its rate of score improvement after fine-tuning is lower than Marian's; 2) when fine-tuned on the relatively larger-sized, well-prepared clinical data, the xLPLM NLLB \textbf{tends to lose} its advantage over the smaller-sized Marian on the two sub-tasks (clinical terms and ontology concepts) under the ClinSpEn-offered metrics METEOR, COMET, and ROUGE-L, and loses to Marian entirely on all metrics, including S\textsc{acre}BLEU and BLEU; 3) \textbf{metrics do not always agree} with each other on the same tasks using the same model outputs.
Traditional automatic evaluation metrics for machine translation have been widely criticised by linguists due to their low accuracy, lack of transparency, focus on language mechanics rather than semantics, and low agreement with human quality evaluation. Human evaluation in the form of MQM-like scorecards is always carried out in real industry settings by both clients and translation service providers (TSPs). However, traditional human translation quality evaluation is expensive to perform, goes into great linguistic detail, raises issues of inter-rater reliability (IRR), and is not designed to measure the quality of translations that are worse than premium quality. In this work, we introduce HOPE, a task-oriented and human-centric evaluation framework for machine translation output, based on professional post-editing annotations. It contains only a limited number of commonly occurring error types and uses a scoring model with a geometric progression of error penalty points (EPPs), reflecting the error severity level of each translation unit. Initial experimental work, carried out on English-Russian MT output of marketing-content text from a highly technical domain, reveals that our evaluation framework is very effective in reflecting MT output quality with regard to both overall system-level performance and segment-level transparency, and that it increases error-type interpretability. The approach has several key advantages, such as the ability to measure and compare less-than-perfect MT output from different systems, the ability to indicate human-perceivable quality, immediate estimation of the labour effort required to bring MT output to premium quality, low-cost and faster application, and higher IRR. Our experimental data is available at \url{https://github.com/lHan87/HOPE}.
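The EPP idea can be sketched in a few lines. Note that the error taxonomy, base penalty, progression ratio, and threshold below are illustrative assumptions, not the framework's exact values:

```python
# Sketch of a HOPE-style segment score: each post-editing error annotation
# carries penalty points that grow geometrically with severity.
# (Taxonomy, penalties, and threshold here are invented for illustration.)
SEVERITY_EPP = {1: 1, 2: 2, 3: 4, 4: 8}  # geometric progression, ratio 2

def segment_epp(errors):
    """errors: list of (error_type, severity) annotations for one segment."""
    return sum(SEVERITY_EPP[sev] for _, sev in errors)

def system_score(segments, threshold=10):
    """Share of segments whose total penalty stays under a quality threshold."""
    ok = sum(1 for errs in segments if segment_epp(errs) < threshold)
    return ok / len(segments)

segments = [
    [("terminology", 2), ("grammar", 1)],      # EPP = 3  -> acceptable
    [("mistranslation", 4), ("omission", 3)],  # EPP = 12 -> needs heavy post-editing
    [],                                        # EPP = 0  -> premium
]
print(system_score(segments))  # two of three segments pass the threshold
```

Because penalties grow geometrically, a single severe error outweighs several minor ones, which matches how post-editors perceive effort.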
From the perspective of both human translation (HT) and machine translation (MT) researchers, translation quality evaluation (TQE) is an essential task. Translation service providers (TSPs) have to deliver large volumes of translations that meet customer specifications, under the harsh constraints of a demanded quality level, a tight time frame, and cost. MT researchers strive to make their models better, which also requires reliable quality evaluation. While automatic machine translation evaluation (MTE) metrics and quality estimation (QE) tools are widely available and easy to access, existing automated tools are not good enough, and human assessment from professional translators (HAP) is often chosen as the gold standard \cite{han-etal-2021-TQA}. Human evaluations, however, are often accused of low reliability and agreement. Is this caused by subjectivity, or by statistics? From the viewpoint of cost and efficiency, how can checking the entire text be avoided, and what is the optimal sample size of the translated text that allows the translation quality of the whole material to be reliably estimated? This work carries out such motivated research: to correctly estimate confidence intervals \cite{Brown_etal2001Interval} depending on the sample size of the translated text, e.g. the number of words or sentences, that needs to be processed in the TQE workflow for a confident and reliable evaluation of overall translation quality. The methodology we apply in this work draws on Bernoulli Statistical Distribution Modelling (BSDM) and Monte Carlo Sampling Analysis (MCSA).
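The sample-size question can be illustrated with a toy Bernoulli/Monte Carlo experiment; the true pass rate, sample sizes, and trial count below are invented, and the paper's BSDM/MCSA analysis is the authoritative treatment:

```python
import random
import statistics

# Toy Monte Carlo check of how sample size affects the spread of a quality
# estimate, treating each sentence as a Bernoulli pass/fail trial.
random.seed(0)
TRUE_PASS_RATE = 0.9  # invented "true" per-sentence quality

def estimate_spread(sample_size, trials=2000):
    """Std. dev. of the estimated pass rate across Monte Carlo resamples."""
    estimates = [
        sum(random.random() < TRUE_PASS_RATE for _ in range(sample_size)) / sample_size
        for _ in range(trials)
    ]
    return statistics.pstdev(estimates)

for n in (50, 200, 800):
    print(n, round(estimate_spread(n), 4))
```

The spread shrinks roughly as $\sqrt{p(1-p)/n}$, so quadrupling the sample only halves the uncertainty, which is exactly the cost/confidence trade-off the TQE workflow has to negotiate.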
Human evaluation has always been expensive, while researchers struggle to trust automatic metrics. To address this, we propose to customise traditional metrics by taking advantage of pre-trained language models (PLMs) and the limited available human-labelled scores. We first re-introduce the hLEPOR metric factors, followed by the Python version we developed (ported), which achieves automatic tuning of the weighting parameters in the hLEPOR metric. We then present the customised hLEPOR (cushLEPOR), which uses the Optuna hyper-parameter optimisation framework to fine-tune the hLEPOR weighting parameters towards better agreement with pre-trained language models (using LaBSE) on the exact MT language pairs that cushLEPOR is deployed to. We also optimise cushLEPOR towards professional human evaluation data based on the MQM and pSQM frameworks on English-German and Chinese-English language pairs. The experimental investigation shows that cushLEPOR boosts hLEPOR towards better agreement with PLMs such as LaBSE at much lower cost, and towards better agreement with human evaluations including MQM and pSQM scores, and yields much better performance than BLEU (data available at \url{https://github.com/poethan/cushLEPOR}). Official results show that our submission wins three language pairs, including \textbf{English-German} and \textbf{Chinese-English} via cushLEPOR(LM), and \textbf{English-Russian} on the \textit{TED} domain via hLEPOR.
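The customisation idea, tuning the weighting of metric sub-scores so the combined score agrees better with trusted scores (LaBSE similarity or human judgements), can be sketched as follows. The paper uses Optuna; this stand-in uses plain random search, and the sub-scores and target scores are invented toy data:

```python
import random

# Stand-in sketch for cushLEPOR-style weight tuning (toy data, random search
# instead of Optuna's samplers).
random.seed(42)

# (precision_component, recall_component, position_penalty) per segment
sub_scores = [(0.9, 0.7, 0.8), (0.4, 0.6, 0.5), (0.8, 0.9, 0.9), (0.3, 0.2, 0.4)]
target = [0.85, 0.50, 0.90, 0.25]  # e.g. LaBSE or MQM-derived scores

def combined(weights, parts):
    """Weighted average of the metric sub-scores."""
    return sum(w * p for w, p in zip(weights, parts)) / sum(weights)

def loss(weights):
    """Squared disagreement between the tuned metric and the trusted scores."""
    preds = [combined(weights, s) for s in sub_scores]
    return sum((p - t) ** 2 for p, t in zip(preds, target))

best_w, best_loss = None, float("inf")
for _ in range(5000):  # Optuna's TPE sampler would search this space far more efficiently
    w = [random.uniform(0.01, 1.0) for _ in range(3)]
    if loss(w) < best_loss:
        best_w, best_loss = w, loss(w)

print(round(best_loss, 4))
```

Once tuned, the cheap weighted metric can be run on every new output without re-invoking the PLM, which is where the cost saving comes from.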
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot benefit, or benefit only marginally, from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) distilling token relations is more effective than CLS-token- and feature-based distillation; 2) using an intermediate layer of the teacher network as the target performs better than using the last layer when the student's depth mismatches the teacher's; 3) weak regularization is preferred; and so on. With these findings, we achieve significant fine-tuning accuracy improvements over from-scratch MIM pre-training on ImageNet-1K classification, with +4.2%/+2.4%/+1.4% gains for the ViT-Tiny, ViT-Small, and ViT-Base models, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way to develop small vision Transformer models: exploring better training methods rather than introducing inductive biases into architectures, as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
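The "distilling token relations" finding can be illustrated framework-free: the student matches the teacher's token-token similarity matrix rather than the raw features, so teacher and student may even have different embedding widths. The vectors below are toy values, not real ViT features:

```python
import math

# Sketch of relation distillation: compare token-token similarity matrices
# instead of raw features. (Toy embeddings; real TinyMIM distils relations
# inside vision Transformers.)
def relation_matrix(tokens):
    """Cosine-similarity matrix between token embeddings."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)
    return [[cos(a, b) for b in tokens] for a in tokens]

def relation_loss(teacher_tokens, student_tokens):
    """MSE between the two relation matrices. Teacher and student widths may
    differ; only the (n_tokens x n_tokens) relation matrices must match."""
    T = relation_matrix(teacher_tokens)
    S = relation_matrix(student_tokens)
    n = len(T)
    return sum((T[i][j] - S[i][j]) ** 2 for i in range(n) for j in range(n)) / n**2

teacher = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]]  # width 3
student = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]                 # width 2
print(round(relation_loss(teacher, student), 4))  # → 0.0 (relations preserved)
```

Here the narrow student reproduces the teacher's pairwise token geometry exactly, so the relation loss vanishes even though the features themselves could never be matched directly.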
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes given only a limited number of support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: First, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, from two aspects: feature-level and instance-level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modification. Benchmarked on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shot counts; e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method in the 10/30-shot settings. We further demonstrate the superiority of our approach on Few-Shot Object Detection. Code and models will be available.
We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to the use of discrete tokens and the need for fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to the use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, and cardinality. Our 900M parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at https://muse-model.github.io
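The parallel-decoding idea can be sketched in miniature: at each step the model predicts every still-masked token at once, the most confident predictions are kept, and the rest stay masked for the next step. The "model" below is a fixed lookup table standing in for the real Transformer, with invented confidence values:

```python
# Toy sketch of Muse-style parallel decoding over masked tokens.
MASK = "<m>"

def fake_model(tokens):
    """Returns (prediction, confidence) per masked position (invented values;
    a real model would condition on the text embedding and unmasked tokens)."""
    table = {0: ("a", 0.9), 1: ("b", 0.4), 2: ("c", 0.8), 3: ("d", 0.3)}
    return {i: table[i] for i, t in enumerate(tokens) if t == MASK}

def parallel_decode(tokens, keep_per_step=2):
    steps = 0
    while MASK in tokens:
        preds = fake_model(tokens)
        # keep only the most confident predictions this step, re-mask the rest
        for i, (tok, _) in sorted(preds.items(), key=lambda kv: -kv[1][1])[:keep_per_step]:
            tokens[i] = tok
        steps += 1
    return tokens, steps

print(parallel_decode([MASK, MASK, MASK, MASK]))  # → (['a', 'b', 'c', 'd'], 2)
```

Filling several tokens per step is what makes this cheaper than autoregressive decoding, which would need one step per token (4 here instead of 2).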
Learning the underlying distribution of molecular graphs and generating high-fidelity samples is a fundamental research problem in drug discovery and material science. However, accurately modeling distribution and rapidly generating novel molecular graphs remain crucial and challenging goals. To accomplish these goals, we propose a novel Conditional Diffusion model based on discrete Graph Structures (CDGS) for molecular graph generation. Specifically, we construct a forward graph diffusion process on both graph structures and inherent features through stochastic differential equations (SDE) and derive discrete graph structures as the condition for reverse generative processes. We present a specialized hybrid graph noise prediction model that extracts the global context and the local node-edge dependency from intermediate graph states. We further utilize ordinary differential equation (ODE) solvers for efficient graph sampling, based on the semi-linear structure of the probability flow ODE. Experiments on diverse datasets validate the effectiveness of our framework. Particularly, the proposed method still generates high-quality molecular graphs in a limited number of steps.
Deep neural networks are vulnerable to adversarial attacks. In this paper, we take the role of investigators who want to trace the attack and identify the source, that is, the particular model from which the adversarial examples were generated. The derived techniques would aid forensic investigation of attack incidents and serve as a deterrent to potential attacks. We consider the buyers-seller setting, where a machine learning model is distributed to various buyers and each buyer receives a slightly different copy with the same functionality. A malicious buyer generates adversarial examples from a particular copy $\mathcal{M}_i$ and uses them to attack other copies. From these adversarial examples, the investigator wants to identify the source $\mathcal{M}_i$. To address this problem, we propose a two-stage separate-and-trace framework. The model separation stage generates multiple copies of a model for the same classification task. This process injects unique characteristics into each copy, so that the adversarial examples generated from it have distinct and traceable features. We give a parallel structure which embeds a ``tracer'' in each copy, and a noise-sensitive training loss to achieve this goal. The tracing stage takes in adversarial examples and a few candidate models, and identifies the likely source. Based on the unique features induced by the noise-sensitive loss function, we can effectively trace the potential adversarial copy by considering the output logits from each tracer. Empirical results show that it is possible to trace the origin of the adversarial example and that the mechanism can be applied to a wide range of architectures and datasets.
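The tracing stage can be caricatured as follows: each distributed copy carries a tracer sensitive to a distinct direction, and a perturbation crafted against copy $i$ excites tracer $i$ most strongly. The directions and the example perturbation below are invented for illustration, not the paper's construction:

```python
# Toy sketch of the trace stage: score an adversarial perturbation under each
# candidate copy's tracer and pick the strongest response. (Invented data.)
TRACER_DIRECTIONS = {
    "copy_0": (1.0, 0.0, 0.0),
    "copy_1": (0.0, 1.0, 0.0),
    "copy_2": (0.0, 0.0, 1.0),
}

def tracer_logit(direction, perturbation):
    """Dot product standing in for a tracer's output logit."""
    return sum(d * p for d, p in zip(direction, perturbation))

def identify_source(perturbation):
    """Pick the candidate copy whose tracer responds most to the perturbation."""
    return max(
        TRACER_DIRECTIONS,
        key=lambda name: abs(tracer_logit(TRACER_DIRECTIONS[name], perturbation)),
    )

# A perturbation crafted (hypothetically) against copy_1 leaks its direction:
adv_perturbation = (0.1, 0.9, 0.05)
print(identify_source(adv_perturbation))  # → copy_1
```

In the actual framework the distinct directions are not hand-assigned but induced by the noise-sensitive training loss during model separation.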